    Learning Linear Temporal Properties

    We present two novel algorithms for learning formulas in Linear Temporal Logic (LTL) from examples. The first learning algorithm reduces the learning task to a series of satisfiability problems in propositional Boolean logic and produces a smallest LTL formula (in terms of the number of subformulas) that is consistent with the given data. Our second learning algorithm combines the SAT-based learning algorithm with classical algorithms for learning decision trees. The result is a learning algorithm that scales to real-world scenarios with hundreds of examples but can no longer guarantee to produce minimal consistent LTL formulas. We compare both learning algorithms and demonstrate their performance on a wide range of synthetic benchmarks. Additionally, we illustrate their usefulness on the task of understanding executions of a leader election protocol.
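
    The SAT encoding itself is intricate, but the learning objective it solves can be stated compactly. The sketch below is a hedged, brute-force stand-in for the paper's SAT-based search (not the paper's algorithm): it assumes a small operator set, traces encoded as lists of sets of true atomic propositions, and returns a smallest formula, by number of subformulas, consistent with the labelled examples.

```python
# Hedged sketch, not the paper's algorithm: brute-force search for a smallest
# LTL formula (by number of subformulas) consistent with labelled finite traces.
# Traces are lists of sets of the atomic propositions that hold at each step.

APS = ["p", "q"]  # atomic propositions used by the enumeration (assumption)

def holds(f, trace, i=0):
    """Evaluate formula f on a finite trace starting at position i."""
    op = f[0]
    if op == "ap":
        return f[1] in trace[i]
    if op == "not":
        return not holds(f[1], trace, i)
    if op == "and":
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "or":
        return holds(f[1], trace, i) or holds(f[2], trace, i)
    if op == "X":
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == "F":
        return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "G":
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "U":
        return any(holds(f[2], trace, j) and
                   all(holds(f[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op}")

_SIZE_CACHE = {}

def formulas_of_size(n):
    """All formulas with exactly n subformulas (memoised)."""
    if n not in _SIZE_CACHE:
        if n == 1:
            result = [("ap", a) for a in APS]
        else:
            result = [(op, sub)
                      for sub in formulas_of_size(n - 1)
                      for op in ("not", "X", "F", "G")]
            for k in range(1, n - 1):
                result += [(op, left, right)
                           for left in formulas_of_size(k)
                           for right in formulas_of_size(n - 1 - k)
                           for op in ("and", "or", "U")]
        _SIZE_CACHE[n] = result
    return _SIZE_CACHE[n]

def learn(positives, negatives, max_size=6):
    """Return a smallest formula consistent with the examples, or None."""
    for n in range(1, max_size + 1):
        for f in formulas_of_size(n):
            if all(holds(f, t) for t in positives) and \
               not any(holds(f, t) for t in negatives):
                return f
    return None

# Positive traces eventually see q, negative traces never do.
pos = [[{"p"}, {"q"}], [{"q"}]]
neg = [[{"p"}, {"p"}], [set()]]
print(learn(pos, neg))  # ('F', ('ap', 'q'))
```

    The enumeration grows exponentially with formula size, which illustrates why a SAT-based search, and its combination with decision-tree learning, is attractive once examples number in the hundreds.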

    Robust Linear Temporal Logic

    Although it is widely accepted that every system should be robust, in the sense that "small" violations of environment assumptions should lead to "small" violations of system guarantees, it is less clear how to make this intuitive notion of robustness mathematically precise. In this paper, we address this problem by developing a robust version of Linear Temporal Logic (LTL), which we call robust LTL and denote by rLTL. Formulas in rLTL are syntactically identical to LTL formulas but are endowed with a many-valued semantics that encodes robustness. In particular, the semantics of the rLTL formula φ ⇒ ψ is such that a "small" violation of the environment assumption φ is guaranteed to produce only a "small" violation of the system guarantee ψ. In addition to introducing rLTL, we study the verification and synthesis problems for this logic: similarly to LTL, we show that both problems are decidable, that the verification problem can be solved in time exponential in the number of subformulas of the rLTL formula at hand, and that the synthesis problem can be solved in doubly exponential time.
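
    The abstract does not spell out the many-valued semantics, so the sketch below is only a toy illustration of the underlying idea, not the paper's definition: on an ultimately periodic word u·v^ω (given as prefix and loop), a guarantee of the form "always p" is graded by how badly it fails, and an implication reports a violation only when the guarantee degrades more than the assumption. The five degrees and the `robust_implies` rule are assumptions made for the example.

```python
# Toy illustration only (assumption, not the paper's rLTL semantics): grade how
# badly "always p" fails on an ultimately periodic word prefix.loop^omega, and
# let an implication report a violation only when the guarantee degrades more
# than the assumption does.

def robust_always(p, prefix, loop):
    """Degree in 0..4 for "always p" on the word prefix.loop^omega (4 = best)."""
    if all(p(s) for s in prefix + loop):   # p holds everywhere          (G p)
        return 4
    if all(p(s) for s in loop):            # p holds from some point on  (FG p)
        return 3
    if any(p(s) for s in loop):            # p holds infinitely often    (GF p)
        return 2
    if any(p(s) for s in prefix + loop):   # p holds at least once       (F p)
        return 1
    return 0                               # p never holds

def robust_implies(deg_assumption, deg_guarantee):
    """Fully true if the guarantee is at least as intact as the assumption."""
    return 4 if deg_guarantee >= deg_assumption else deg_guarantee

# The environment violates "always req" only finitely often, and the system
# still delivers "always grant" from some point on: the implication holds fully.
prefix = [{"req"}, set()]          # one position where req is violated
loop = [{"req", "grant"}]          # the behaviour repeated forever
a = robust_always(lambda s: "req" in s, prefix, loop)     # degree 3
g = robust_always(lambda s: "grant" in s, prefix, loop)   # degree 3
print(robust_implies(a, g))  # 4
```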

    Inferring Properties in Computation Tree Logic

    We consider the problem of automatically inferring specifications in the branching-time logic Computation Tree Logic (CTL) from a given system. Designing functional and usable specifications has always been one of the biggest challenges of formal methods. While recent work has focused on automatically designing specifications in linear-time logics such as Linear Temporal Logic (LTL) and Signal Temporal Logic (STL), little attention has been given to branching-time logics despite their popularity in formal methods. We aim to infer concise (and thus interpretable) CTL formulas from a given finite-state model of the system under consideration. However, inferring a specification only from the given model (and, in general, from only positive examples) is an ill-posed problem. As a result, we infer a CTL formula that, along with being concise, is also language-minimal, meaning that it is rather specific to the given model. We design a counterexample-guided algorithm to infer a concise and language-minimal CTL formula via the generation of undesirable models. In the process, we also develop, for the first time, a passive learning algorithm to infer CTL formulas from a set of desirable and undesirable Kripke structures. The passive learning algorithm involves encoding a popular CTL model-checking procedure as a Boolean satisfiability (SAT) problem.
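
    The passive learner's key ingredient, as described above, is a SAT encoding of CTL model checking. That encoding is not reproduced here; instead, the sketch below shows the classical fixed-point labelling procedure for a small CTL fragment (EX, EU, EG) on an explicit Kripke structure, which is the kind of computation such an encoding has to capture. The structure encoding (state set, successor map, labelling map) is an assumption made for the example.

```python
# Hedged sketch: the classical fixed-point labelling algorithm for CTL model
# checking on an explicit Kripke structure (EX / EU / EG fragment).  The
# structure encoding below is an assumption made for the example.

def sat_states(states, trans, label, f):
    """Set of states of the Kripke structure satisfying CTL formula f."""
    op = f[0]
    if op == "ap":
        return {s for s in states if f[1] in label[s]}
    if op == "not":
        return states - sat_states(states, trans, label, f[1])
    if op == "and":
        return (sat_states(states, trans, label, f[1]) &
                sat_states(states, trans, label, f[2]))
    if op == "EX":
        target = sat_states(states, trans, label, f[1])
        return {s for s in states if trans[s] & target}
    if op == "EU":  # E[f1 U f2]: least fixed point
        keep = sat_states(states, trans, label, f[1])
        result = sat_states(states, trans, label, f[2])
        while True:
            grown = result | {s for s in keep if trans[s] & result}
            if grown == result:
                return result
            result = grown
    if op == "EG":  # EG f1: greatest fixed point
        result = sat_states(states, trans, label, f[1])
        while True:
            shrunk = {s for s in result if trans[s] & result}
            if shrunk == result:
                return result
            result = shrunk
    raise ValueError(f"unknown operator {op}")

# Tiny Kripke structure: s0 -> {s0, s1}, s1 -> {s1}.
states = {"s0", "s1"}
trans = {"s0": {"s0", "s1"}, "s1": {"s1"}}
label = {"s0": {"p"}, "s1": {"q"}}
print(sat_states(states, trans, label, ("EU", ("ap", "p"), ("ap", "q"))))
# {'s0', 's1'}: from both states some path satisfies p until q
```

    Together with negation and conjunction, the EX, EU, and EG operators suffice to express all of CTL, which keeps both the checker and a propositional encoding of it small.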

    Robust Alternating-Time Temporal Logic

    In multi-agent system design, a crucial aspect is to ensure robustness, meaning that, for a coalition of agents A, small violations of adversarial assumptions lead only to small violations of A's goals. In this paper we introduce a logical framework for robust strategic reasoning about multi-agent systems. Specifically, inspired by recent work on robust temporal logics, we introduce and study rATL and rATL*, logics that extend the well-known Alternating-time Temporal Logics ATL and ATL* with a suitable multi-valued semantics for the strategy quantifiers and temporal operators. We study the model-checking and satisfiability problems for rATL and rATL* and show that dealing with robustness comes at no additional computational cost: for rATL, model checking is PTime-complete and satisfiability is ExpTime-complete, while both problems are 2ExpTime-complete for rATL*.
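
    At the core of ATL model checking, on which rATL builds, is a one-step question: can coalition A fix a joint move so that, whatever the remaining agents do, the successor state satisfies the goal? The sketch below implements just that check; the concurrent-game encoding (per-state move lists, joint-action transition table) and all identifiers are assumptions made for the example, and the multi-valued rATL layer is not modelled.

```python
# Hedged sketch: the one-step "coalition can force" check underlying ATL model
# checking (<<A>> X goal).  The concurrent-game encoding below is an assumption.
from itertools import product

def can_force(state, coalition, agents, moves, delta, goal_states):
    """True iff the coalition has a joint move such that, for every choice of
    the remaining agents, the successor state lies in goal_states."""
    others = [a for a in agents if a not in coalition]
    for own in product(*(moves[(state, a)] for a in coalition)):
        choice = dict(zip(coalition, own))
        def successor(theirs):
            joint = {**choice, **dict(zip(others, theirs))}
            return delta[(state, tuple(joint[a] for a in agents))]
        if all(successor(theirs) in goal_states
               for theirs in product(*(moves[(state, a)] for a in others))):
            return True
    return False

# Two agents; by playing "l", agent a1 alone forces the successor into {"safe"}.
agents = ["a1", "a2"]
moves = {("s", "a1"): ["l", "r"], ("s", "a2"): ["l", "r"]}
delta = {("s", ("l", "l")): "safe", ("s", ("l", "r")): "safe",
         ("s", ("r", "l")): "bad",  ("s", ("r", "r")): "safe"}
print(can_force("s", ["a1"], agents, moves, delta, {"safe"}))  # True
```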

    Optimally Resilient Strategies in Pushdown Safety Games

    Infinite-duration games with disturbances extend the classical framework of infinite-duration games, which captures the reactive synthesis problem, with a discrete measure of resilience against non-antagonistic external influence. This concerns events where the observed system behavior differs from the intended one prescribed by the controller. For games played on finite arenas, it is known that computing optimally resilient strategies incurs only a polynomial overhead over solving classical games. This paper studies safety games with disturbances played on infinite arenas induced by pushdown systems. We show how to compute optimally resilient strategies in triply exponential time. For the subclass of safety games played on one-counter configuration graphs, we show that determining the degree of resilience of the initial configuration is PSPACE-complete and that optimally resilient strategies can be computed in doubly exponential time.
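
    On finite arenas, the basic building block behind such resilience computations is an ordinary safety-game solver. The sketch below implements only that finite-arena step via an attractor computation, assuming an explicit vertex/edge/owner encoding; disturbance edges, resilience values, and the pushdown and one-counter arenas studied in the paper are deliberately left out.

```python
# Hedged sketch: solving a plain safety game on a finite arena by computing the
# opponent's attractor of the unsafe vertices.  Disturbances and infinite
# (pushdown / one-counter) arenas are not modelled; the encoding is an
# assumption made for the example.

def attractor(vertices, edges, owner, target, player):
    """Vertices from which `player` can force a visit to `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices - attr:
            succs = edges[v]
            if (owner[v] == player and succs & attr) or \
               (owner[v] != player and succs <= attr):
                attr.add(v)
                changed = True
    return attr

def safety_winning_region(vertices, edges, owner, unsafe):
    """Vertices from which player 0 can avoid `unsafe` forever."""
    return vertices - attractor(vertices, edges, owner, unsafe, player=1)

# Tiny arena: player 0 owns v0, player 1 owns v1; v2 is the unsafe sink.
vertices = {"v0", "v1", "v2"}
edges = {"v0": {"v0", "v1"}, "v1": {"v0", "v2"}, "v2": {"v2"}}
owner = {"v0": 0, "v1": 1, "v2": 1}
print(safety_winning_region(vertices, edges, owner, {"v2"}))  # {'v0'}
```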